The journal impact factor (JIF) is often equated with journal quality and with the quality of peer review of the papers submitted to that journal. We examined the association between the content of peer review and JIF by analysing 10,000 peer review reports submitted to 1,644 medical and life sciences journals. Two researchers hand-coded a random sample of 2,000 sentences. We then trained machine learning models to classify all 187,240 sentences as contributing or not contributing to content categories. We examined the association between ten groups of journals defined by JIF deciles and the content of peer reviews using linear mixed-effects models, adjusting for the length of the review. JIFs ranged from 0.21 to 74.70. The length of peer reviews increased from the lowest (median 185 words) to the highest JIF group (387 words). The proportion of sentences allocated to different content categories varied widely, even within JIF groups. For thoroughness, sentences on 'Materials and Methods' were more prevalent in the highest JIF journals than in the lowest JIF group (difference 7.8 percentage points; 95% CI 4.9 to 10.7). The trend for 'Presentation and Reporting' went in the opposite direction, with the highest JIF journals giving less emphasis to such content (difference -8.9 percentage points; 95% CI -11.3 to -6.5). For helpfulness, reviews for higher JIF journals devoted less attention to 'Suggestion and Solution' and provided fewer examples than reviews for lower impact factor journals. There were no, or only small, differences for the other content categories. In conclusion, peer review in higher JIF journals tends to be more thorough in discussing the methods used, but less helpful in terms of suggesting solutions and providing examples. Differences were modest and variability was high, suggesting that the JIF is a poor predictor of the quality of peer review of an individual manuscript.
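As a minimal sketch of the decile grouping used in the analysis (the JIF values below are illustrative, not the study's data):

```python
import bisect
import statistics

def jif_decile_groups(jifs):
    """Assign each journal impact factor to one of ten decile groups (1..10)."""
    # Nine cut points split the distribution into ten equal-frequency groups.
    cuts = statistics.quantiles(jifs, n=10)
    return [bisect.bisect_right(cuts, j) + 1 for j in jifs]

# Illustrative JIF values spanning the reported range (0.21 to 74.70).
jifs = [0.21, 0.8, 1.2, 1.9, 2.5, 3.1, 4.0, 5.6, 9.3, 74.70]
groups = jif_decile_groups(jifs)
```

Each journal's group index can then serve as the grouping variable in a mixed-effects model.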
LiDAR-based place recognition is one of the key components of global localization in autonomous driving and robotics applications. With the success of deep learning methods in learning useful information from 3D LiDAR data, place recognition has also benefited from this modality, leading to higher re-localization and loop-closure detection performance, particularly in environments with significantly changing conditions. Despite the progress in this field, extracting proper and efficient descriptors from 3D LiDAR data that are invariant to changing conditions and orientation remains an unsolved challenge. To address this problem, this work proposes a novel 3D LiDAR-based deep learning network (named AttDLNet) that uses a range-based proxy representation for point clouds and an attention network with stacked attention layers to selectively focus on long-range context and inter-feature relationships. The proposed network was trained and validated on the KITTI dataset, and an ablation study is provided to assess the novel attention network. Results show that adding attention to the network improves performance, leading to efficient loop closures and outperforming established 3D LiDAR-based place recognition approaches. From the ablation study, results indicate that the middle encoder layers have the highest mean performance, while deeper layers are more robust to orientation changes. The code is publicly available at https://github.com/cybonic/attdlnet
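A range-based proxy representation is commonly obtained by spherically projecting the point cloud onto an image grid; a minimal sketch, with image size and vertical field of view chosen for illustration (not taken from AttDLNet):

```python
import math

def spherical_project(points, width=900, height=64, fov_up_deg=3.0, fov_down_deg=-25.0):
    """Project 3D LiDAR points (x, y, z) to (row, col, range) pixels of a range image."""
    fov_up = math.radians(fov_up_deg)
    fov_down = math.radians(abs(fov_down_deg))
    fov = fov_up + fov_down
    pixels = []
    for x, y, z in points:
        r = math.sqrt(x * x + y * y + z * z)
        yaw = math.atan2(y, x)                 # horizontal angle in [-pi, pi]
        pitch = math.asin(z / r)               # vertical angle
        col = int(0.5 * (1.0 - yaw / math.pi) * width) % width
        row = int((1.0 - (pitch + fov_down) / fov) * height)
        row = min(max(row, 0), height - 1)     # clamp rays outside the FOV
        pixels.append((row, col, r))
    return pixels

# A point straight ahead of the sensor lands in the middle column.
pix = spherical_project([(10.0, 0.0, 0.0)])
```

The resulting (height x width) range image can then be fed to a 2D encoder with stacked attention layers.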
Landing an unmanned aerial vehicle (UAV) on top of an unmanned surface vehicle (USV) in harsh open waters is a challenging problem, owing to forces that can damage the UAV due to a severe roll and/or pitch angle of the USV during touchdown. To tackle this, we propose a novel model predictive control (MPC) approach enabling a UAV to land autonomously on a USV in these harsh conditions. The MPC employs a novel objective function and an online decomposition of the oscillatory motion of the vessel to predict, attempt, and accomplish the landing during near-zero tilt of the landing platform. The nonlinear prediction of the motion of the vessel is performed using visual data from an onboard camera. Therefore, the system does not require any communication with the USV or a control station. The proposed method was analyzed in numerous robotics simulations in harsh and extreme conditions and further validated in various real-world scenarios.
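The abstract does not detail the decomposition; as one simple illustration of the general idea, the dominant frequency of an oscillatory tilt signal can be recovered with a discrete Fourier transform over a recent window and used to predict the next near-zero-tilt instant (the pure sinusoidal roll below is a toy signal, not the paper's method):

```python
import cmath
import math

def dominant_frequency(samples, dt):
    """Return the dominant oscillation frequency (Hz) of a real signal via a DFT."""
    n = len(samples)
    best_k, best_mag = 0, 0.0
    for k in range(1, n // 2):                 # skip the DC bin
        coeff = sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n))
        if abs(coeff) > best_mag:
            best_k, best_mag = k, abs(coeff)
    return best_k / (n * dt)

# Simulated roll angle of the vessel: a 0.5 Hz swell sampled for 10 s.
dt = 0.05
roll = [0.2 * math.sin(2 * math.pi * 0.5 * t * dt) for t in range(200)]
f = dominant_frequency(roll, dt)

# Zero crossings of a sinusoid occur every half period, so the next
# near-zero-tilt instant after time t0 is the next multiple of 1 / (2 f).
t0 = 10.3
next_zero_tilt = math.ceil(t0 * 2 * f) / (2 * f)
```

A real controller would refit such a decomposition online from camera-based pose estimates as new measurements arrive.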
We describe a Physics-Informed Neural Network (PINN) that simulates the flow induced by the astronomical tide in a synthetic port channel, with dimensions based on the Santos - S\~ao Vicente - Bertioga Estuarine System. PINN models aim to combine the knowledge of physical systems and data-driven machine learning models. This is done by training a neural network to minimize the residuals of the governing equations in sample points. In this work, our flow is governed by the Navier-Stokes equations with some approximations. There are two main novelties in this paper. First, we design our model to assume that the flow is periodic in time, which is not feasible in conventional simulation methods. Second, we evaluate the benefit of resampling the function evaluation points during training, which has a near zero computational cost and has been verified to improve the final model, especially for small batch sizes. Finally, we discuss some limitations of the approximations used in the Navier-Stokes equations regarding the modeling of turbulence and how it interacts with PINNs.
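The residual-minimisation idea, the periodic-in-time ansatz, and the resampling of evaluation points can all be illustrated on a toy ODE (this is not the paper's Navier-Stokes setup, and analytic gradients stand in for automatic differentiation):

```python
import math
import random

# Toy "physics": u'(t) = cos(t), whose periodic solutions are u = sin(t) + C.
# Periodicity in time is built into the ansatz u(t) = a*cos(t) + b*sin(t),
# so no temporal boundary conditions are needed (the paper's first idea).
a, b = 0.0, 0.0
lr = 0.05
rng = random.Random(0)

for step in range(2000):
    # Resample the collocation points at every step (the paper's second idea);
    # this costs almost nothing and decorrelates the residual estimates.
    ts = [rng.uniform(0.0, 2.0 * math.pi) for _ in range(8)]
    grad_a = grad_b = 0.0
    for t in ts:
        residual = (-a * math.sin(t) + b * math.cos(t)) - math.cos(t)
        grad_a += 2.0 * residual * (-math.sin(t))   # d(residual^2)/da
        grad_b += 2.0 * residual * math.cos(t)      # d(residual^2)/db
    a -= lr * grad_a / len(ts)
    b -= lr * grad_b / len(ts)
# Minimising the squared residual drives (a, b) toward (0, 1), i.e. u(t) = sin(t).
```

In a PINN proper, the ansatz is a neural network and the residuals of the governing equations are differentiated with autodiff, but the training loop has the same shape.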
Language modeling, a central task in natural language processing, involves estimating a probability distribution over strings. In most cases, the estimated distribution sums to 1 over all finite strings. However, in some pathological cases, probability mass can ``leak'' onto the set of infinite sequences. In order to characterize the notion of leakage more precisely, this paper offers a measure-theoretic treatment of language modeling. We prove that many popular language model families are in fact tight, meaning that they will not leak in this sense. We also generalize characterizations of tightness proposed in previous works.
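A toy autoregressive model makes the notion of leakage concrete: if the end-of-string (EOS) probability decays too quickly with sequence length, a positive fraction of probability mass escapes to infinite sequences (the EOS schedules below are illustrative):

```python
# The model emits EOS at step n with probability p_n and continues otherwise.
# The mass on finite strings is sum_n p_n * prod_{m<n} (1 - p_m); the model is
# "tight" (mass 1) exactly when the survival product prod_n (1 - p_n) tends to 0.

def finite_string_mass(eos_prob, steps):
    survive, mass = 1.0, 0.0
    for n in range(steps):
        p = eos_prob(n)
        mass += survive * p
        survive *= 1.0 - p
    return mass

tight = finite_string_mass(lambda n: 0.1, 2000)             # constant EOS prob
leaky = finite_string_mass(lambda n: 0.5 ** (n + 2), 2000)  # rapidly decaying
```

With a constant EOS probability essentially all mass lands on finite strings, while with the geometrically decaying schedule roughly 58% of the mass "leaks" onto infinite sequences.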
After just a few hundred training updates, a standard probabilistic model for language generation has likely not yet learnt many semantic or syntactic rules of natural language, which inherently makes it difficult to estimate the right probability distribution over next tokens. Yet around this point, these models have identified a simple, loss-minimising behaviour: to output the unigram distribution of the target training corpus. The use of such a crude heuristic raises the question: Rather than wasting precious compute resources and model capacity for learning this strategy at early training stages, can we initialise our models with this behaviour? Here, we show that we can effectively endow our model with a separate module that reflects unigram frequency statistics as prior knowledge. Standard neural language generation architectures offer a natural opportunity for implementing this idea: by initialising the bias term in a model's final linear layer with the log-unigram distribution. Experiments in neural machine translation demonstrate that this simple technique: (i) improves learning efficiency; (ii) achieves better overall performance; and (iii) appears to disentangle strong frequency effects, encouraging the model to specialise in non-frequency-related aspects of language.
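The bias-initialisation trick can be checked directly: with zero hidden-to-output weights and the final layer's bias set to log-unigram probabilities, the softmax output is exactly the unigram distribution of the training corpus (the tiny corpus below is illustrative):

```python
import math
from collections import Counter

corpus = "the cat sat on the mat the end".split()
counts = Counter(corpus)
total = sum(counts.values())
vocab = sorted(counts)

# Initialise the output bias with the log-unigram distribution.
bias = [math.log(counts[w] / total) for w in vocab]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Before any training (zero weights, so logits == bias), the model already
# predicts the corpus unigram frequencies.
predicted = softmax(bias)
unigram = [counts[w] / total for w in vocab]
```

Smoothing unseen tokens (e.g. assigning them a small floor probability before taking logs) would be needed in practice to avoid infinite negative biases.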
In this work, we investigate the representation capacity of multilayer perceptron networks that use the sine as activation function - sinusoidal neural networks. We show that the layer composition in such networks compacts information. For this, we prove that the composition of sinusoidal layers expands as a sum of sines consisting of a large number of new frequencies given by linear combinations of the weights of the network's first layer. We provide the expression of the corresponding amplitudes in terms of the Bessel functions and give an upper bound for them that can be used to control the resulting approximation.
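The expansion alluded to here is, in the one-dimensional case, an instance of the classical Jacobi-Anger identity, which exposes the new frequencies and their Bessel-function amplitudes (the network's multi-frequency case generalizes this):

```latex
\sin(z \sin\theta) = 2 \sum_{k=0}^{\infty} J_{2k+1}(z)\, \sin\big((2k+1)\theta\big),
\qquad
\cos(z \sin\theta) = J_0(z) + 2 \sum_{k=1}^{\infty} J_{2k}(z)\, \cos\big(2k\theta\big).
```

The standard bound $|J_n(z)| \le (|z|/2)^n / n!$ then shows that the amplitudes of the higher harmonics decay rapidly, which is the kind of control on the resulting approximation the abstract refers to.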
In this paper, we seek to measure how much information a component in a neural network could extract from the representations fed into it. Our work stands in contrast to prior probing work, most of which investigates how much information a model's representations contain. This shift in perspective leads us to propose a new principle for probing, the architectural bottleneck principle: In order to estimate how much information a given component could extract, a probe should look exactly like the component. Relying on this principle, we estimate how much syntactic information is available to transformers through our attentional probe, a probe that exactly resembles a transformer's self-attention head. Experimentally, we find that, in three models (BERT, ALBERT, and RoBERTa), a sentence's syntax tree is mostly extractable by our probe, suggesting these models have access to syntactic information while composing their contextual representations. Whether this information is actually used by these models, however, remains an open question.
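Under the architectural bottleneck principle, an attentional probe has the same functional form as a self-attention head, softmax(QK^T / sqrt(d)); a minimal sketch with illustrative (untrained, identity) projections:

```python
import math

def matvec(m, v):
    return [sum(mi * vi for mi, vi in zip(row, v)) for row in m]

def attention_weights(reps, w_q, w_k):
    """Attention matrix of a single self-attention head over token representations."""
    q = [matvec(w_q, h) for h in reps]
    k = [matvec(w_k, h) for h in reps]
    d = len(q[0])
    weights = []
    for qi in q:
        scores = [sum(a * b for a, b in zip(qi, kj)) / math.sqrt(d) for kj in k]
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights.append([e / z for e in exps])
    return weights

# Three toy token representations and identity projections (illustrative only).
reps = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
eye = [[1.0, 0.0], [0.0, 1.0]]
attn = attention_weights(reps, eye, eye)
```

A syntactic probe of this form would train w_q and w_k so that the argmax of row i points at token i's head in the syntax tree, mirroring exactly what a transformer's own attention head could compute.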
Despite being responsible for state-of-the-art results in several computer vision and natural language processing tasks, neural networks have faced harsh criticism due to some of their current shortcomings. One of them is that neural networks are correlation machines prone to model biases within the data instead of focusing on actual useful causal relationships. This problem is particularly serious in application domains affected by aspects such as race, gender, and age. To prevent models from incurring on unfair decision-making, the AI community has concentrated efforts in correcting algorithmic biases, giving rise to the research area now widely known as fairness in AI. In this survey paper, we provide an in-depth overview of the main debiasing methods for fairness-aware neural networks in the context of vision and language research. We propose a novel taxonomy to better organize the literature on debiasing methods for fairness, and we discuss the current challenges, trends, and important future work directions for the interested researcher and practitioner.
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.